233 search results.
71.
Rudolf Ahlswede Ning Cai 《Applicable Algebra in Engineering, Communication and Computing》1993,4(4):253-261
For two matrix operations, called quasi-direct sum and quasi-outer product, we determine their deviations from multiplicative behaviour of the rank. The second operation arises in the determination of the function table for so-called sum-type functions such as the Hamming distance. A consequence of the corresponding rank formula is that the frequently used log rank can be a very poor bound for two-way communication complexity. Instead, as was shown in [9], a certain exponential rank often gives excellent or even optimal bounds.
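The weakness of the log-rank bound can be checked numerically for the Hamming distance, whose function table on n-bit inputs has rank only n + 1, so log rank grows like log n while the communication cost grows like n. The sketch below is an illustration written for this summary, not the paper's construction; `rank` and `hamming_table` are names introduced here.

```python
import math
from fractions import Fraction
from itertools import product

def rank(rows):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def hamming_table(n):
    """Function table M[x][y] = Hamming distance, over all x, y in {0,1}^n."""
    pts = list(product((0, 1), repeat=n))
    return [[sum(a != b for a, b in zip(x, y)) for y in pts] for x in pts]

n = 4
r = rank(hamming_table(n))  # rank of the 16 x 16 table is n + 1 = 5
print(r, math.log2(r))      # log2(rank) is tiny compared to the n input bits
```

The rank bound follows from d(x, y) = |x| + |y| - 2<x, y>, which writes the table as a sum of n + 1 rank-one matrices.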
72.
Ruey-Ling Yeh Ching Liu Ben-Chang Shia Yu-Ting Cheng Ya-Fang Huwang 《Journal of Intelligent Manufacturing》2008,19(1):109-118
Data plays a vital role as a source of information for organizations, especially in the information age. One often encounters an imperfect database from which data are missing, and the results obtained from such a database may be biased or misleading. Imputing missing data has therefore been regarded as one of the major steps in data mining. The present research used different data mining methods to construct imputation models for different types of missing data. When the missing data are continuous, regression models and neural networks are used to build the imputation models. For categorical missing data, the logistic regression model, neural network, C5.0 and CART are employed. The results showed that the regression model provided the best estimates of continuous missing data, while for categorical missing data the C5.0 model proved the best method.
73.
We present a new approach to clustering and visualization of DNA microarray gene expression data. We utilize the self-organizing map (SOM) framework for handling (dis)similarities between genes in terms of their expression characteristics, relying on appropriately defined distances between ranked gene attributes that are also capable of handling missing values. As a case study, we consider breast cancer data and the gene ESR1, whose expression alterations, appearing in many of the tumor subtypes, have already been observed to be correlated with some other significant genes. Preliminary results positively verify the applicability of our approach, although further development is definitely needed; they suggest that it may be very effective when used by domain experts. The algorithmic toolkit is enriched with a GUI enabling users to interactively support the SOM optimization process. Its effectiveness is achieved by drag-and-drop techniques allowing cluster modification according to expert knowledge or intuition.
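The SOM machinery the approach builds on can be sketched in a few lines. The toy one-dimensional SOM below (prototypes pulled toward each sample, with a neighborhood and learning rate that shrink over time) is a generic illustration, not the authors' toolkit; all names, the schedule, and the data are choices made here.

```python
import math
import random

def train_som(data, n_units=4, epochs=200, lr=0.5, sigma=1.0, seed=0):
    """One-dimensional self-organizing map for vectors in R^d (toy sketch)."""
    rng = random.Random(seed)
    d = len(data[0])
    units = [[rng.random() for _ in range(d)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        x = rng.choice(data)
        # best-matching unit = prototype closest to the sample
        bmu = min(range(n_units),
                  key=lambda k: sum((u - v) ** 2 for u, v in zip(units[k], x)))
        for k in range(n_units):
            # Gaussian neighborhood on the 1-D grid, shrinking with time
            h = math.exp(-((k - bmu) ** 2) / (2 * (sigma * (1 - frac) + 0.01) ** 2))
            step = lr * (1 - frac) * h
            units[k] = [u + step * (v - u) for u, v in zip(units[k], x)]
    return units

data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
units = train_som(data)
print(units)  # prototypes drift toward the two clusters
```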
74.
Arturo J. Fernández 《Computational statistics & data analysis》2006,51(2):1119-1130
Trimmed samples are widely employed in several areas of statistical practice, especially when some sample values at either or both extremes might have been contaminated. The problem of estimating the inequality and precision parameters of a Pareto distribution based on a trimmed sample and prior information is considered. From an inferential viewpoint, the problem of finding the highest posterior density (HPD) estimates of the Pareto parameters is discussed. The existence and uniqueness of the HPD estimates are established under mild conditions; explicit and accurate lower and upper bounds are also provided. Adopting a decision-theoretic perspective, several Bayesian estimators for standard loss functions are presented. In addition, two-sided and HPD credibility intervals for each Pareto parameter and joint HPD credibility regions for both parameters are derived, which have the corresponding frequentist confidence level in the noninformative case. Finally, an illustrative example concerning annual wage data is included.
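For intuition, in the simplest untrimmed, known-scale case the Pareto shape parameter has a conjugate gamma posterior, and for a unimodal posterior the HPD point estimate is its mode. The sketch below covers only this special case (trimming changes the likelihood, which is the paper's actual subject); the default hyperparameters and the function name are assumptions made here.

```python
import math

def pareto_shape_posterior(data, beta, a=1.0, b=0.0):
    """Gamma(a, b) prior on the Pareto shape alpha, with known scale beta,
    gives a Gamma(a + n, b + T) posterior where T = sum(log(x_i / beta)).
    Return the posterior mode, the HPD point estimate; a = 1, b = 0
    (an improper limit) recovers the maximum-likelihood estimate n / T."""
    n = len(data)
    T = sum(math.log(x / beta) for x in data)
    return (a + n - 1) / (b + T)

sample = [math.e, math.e ** 2]  # beta = 1, so T = 1 + 2 = 3
print(pareto_shape_posterior(sample, beta=1.0))
```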
75.
J.D. Godolphin 《Computational statistics & data analysis》2006,51(3):1862-1874
If observations are lost from an experiment involving one or more forms of blocking, it can happen that the resultant design is treatment-disconnected, which has serious implications for the experiment. A method is described in this paper for specifying each set of observations with the property that a treatment-disconnected design results if that observation set is missing. Some implications for two-replicate designs are described, and four applications that illustrate the value of the method are discussed.
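Whether a given loss of observations leaves the design treatment-connected can be checked mechanically: the design is connected exactly when the graph joining treatments that co-occur in some remaining block has a single component. A small union-find sketch (the names and example blocks are illustrative, not from the paper):

```python
def is_connected(blocks, treatments):
    """Treatment-connectivity check: union treatments sharing a block,
    then test whether a single component remains."""
    parent = {t: t for t in treatments}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    for blk in blocks:
        blk = list(blk)
        for t in blk[1:]:
            parent[find(t)] = find(blk[0])
    return len({find(t) for t in treatments}) == 1

blocks = [{1, 2}, {2, 3}, {3, 4}]
print(is_connected(blocks, [1, 2, 3, 4]))                    # True
# losing the middle block severs treatments {1, 2} from {3, 4}:
print(is_connected([{1, 2}, {3, 4}], [1, 2, 3, 4]))          # False
```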
76.
77.
Mostafa Ghannad-Rezaie Hamid Soltanian-Zadeh Hao Ying 《Pattern recognition》2010,43(6):2340-2350
This paper proposes a new approach based on missing value pattern discovery for classifying incomplete data. This approach is particularly designed for classification of datasets with a small number of samples and a high percentage of missing values, where available missing value treatment approaches do not usually work well. Based on the pattern of the missing values, the proposed approach finds subsets of samples for which most of the features are available and trains a classifier for each subset. Then, it combines the outputs of the classifiers. Subset selection is translated into a clustering problem, allowing derivation of a mathematical framework for it. A trade-off is established between the computational complexity (number of subsets) and the accuracy of the overall classifier. To deal with this trade-off, a numerical criterion is proposed for the prediction of the overall performance. The proposed method is applied to seven datasets from the popular University of California, Irvine data mining archive and an epilepsy dataset from Henry Ford Hospital, Detroit, Michigan (a total of eight datasets). Experimental results show that the classification accuracy of the proposed method is superior to those of the widely used multiple imputation method and four other methods. They also show that the level of superiority depends on the pattern and percentage of missing values.
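The core idea, grouping samples by their missing-value pattern and training one classifier per pattern subset, can be sketched as follows. The per-subset classifier here is just a majority vote for brevity; the paper's actual classifiers and its clustering-based subset selection are not reproduced, and all identifiers are invented for illustration.

```python
from collections import Counter, defaultdict

def pattern_of(row):
    """Missing-value pattern: which features are observed (True) vs missing."""
    return tuple(v is not None for v in row)

def train_per_pattern(X, y):
    """Group samples by missing-value pattern and fit one toy majority-vote
    classifier per group; a real system trains a full classifier per subset."""
    groups = defaultdict(list)
    for row, label in zip(X, y):
        groups[pattern_of(row)].append(label)
    return {p: Counter(labels).most_common(1)[0][0]
            for p, labels in groups.items()}

def predict(models, row, default=None):
    """Route a sample to the model trained on its missing-value pattern."""
    return models.get(pattern_of(row), default)

X = [[1, None], [2, None], [None, 3]]
y = ["a", "a", "b"]
models = train_per_pattern(X, y)
print(predict(models, [5, None]), predict(models, [None, 9]))
```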
78.
Consistent, spatially and temporally complete reflectance time series are required for reliable terrestrial monitoring. The Moderate Resolution Imaging Spectroradiometer (MODIS), like other polar-orbiting wide-field-of-view satellite sensors, can provide global observations on a nearly daily basis, but the sparseness of valid observations due to cloud, residual atmospheric effects, and sensor anomalies may result in gaps in the derived reflectance time series. This paper presents an approach for the generation of temporally complete daily MODIS 500 m nadir-view BRDF-adjusted reflectance (NBAR) time series. The research is illustrated and assessed quantitatively using two years of cloud- and snow-screened daily MODIS Terra and Aqua reflectance data at four sites in Africa, and demonstrated for phenology monitoring using NBAR-derived NDVI time series. The components of the approach include: 1) an outlier detection algorithm to remove residual anomalous daily observations undetected in the upstream processing, 2) the dynamic generation of NBAR time series on a daily basis when seven or more observations are available over a 16-day period for the day under consideration, and 3) the means to gap-fill the NBAR time series where fewer than seven observations are available. The MODIS Ross-Thick/Li-Sparse-Reciprocal BRDF model is used with a rolling approach whereby a 16-day BRDF inversion window is moved on a daily overlapping basis to provide more reliable outlier detection and daily NBAR. NBAR gap filling in periods of missing observations is investigated using static land-cover-specific archetype BRDF parameters and using BRDF parameters defined adaptively from the temporally closest 16-day periods with seven or more observations. Scaling factor estimators using ordinary least squares (OLS) and median-based robust least squares regression are investigated, and the robust method is demonstrated to provide, on average, temporally more coherent gap-filled NBAR values.
For regions with persistent clouds, the utility of the adaptive NBAR gap-filling method is shown to be severely limited by the decreased likelihood that the surface BRDF at each gap can be described reliably. The reliability of the NBAR gap-filling methodology is evaluated statistically using a cross-validation approach. For the small number of study sites considered, the adaptive method is shown to provide more accurate results than the archetype method when there are on average more than ~4-5 observations per 16-day window, or when a gap day is on average less than about 30 days from a 16-day period with seven or more observations. The resulting gap-free daily NBAR time series and the derived daily NBAR NDVI are shown to capture phenological variations in a coherent, temporally consistent manner, suggesting that this is a fruitful avenue for future research and validation.
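The rolling-window logic (invert when a 16-day window holds seven or more valid observations, otherwise fall back to the temporally closest qualifying window) can be caricatured on a scalar series. The sketch below stands a simple window average in for the BRDF inversion; everything else, including the function name and thresholds as parameters, is an assumption for illustration.

```python
def gap_fill(series, window=16, min_obs=7):
    """Rolling-window fill: average a day's window if it holds >= min_obs
    valid values; otherwise copy from the temporally closest qualifying day
    (a stand-in for the adaptive BRDF-parameter fallback)."""
    n = len(series)

    def window_obs(i):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2)
        return [v for v in series[lo:hi] if v is not None]

    daily = [sum(o) / len(o) if len(o) >= min_obs else None
             for o in (window_obs(i) for i in range(n))]
    filled = []
    for i, v in enumerate(daily):
        if v is None:
            near = min((j for j in range(n) if daily[j] is not None),
                       key=lambda j: abs(j - i), default=None)
            v = daily[near] if near is not None else None
        filled.append(v)
    return filled

series = [1.0] * 20 + [None] * 20 + [2.0] * 20
filled = gap_fill(series)
print(filled[10], filled[30], filled[50])
```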
79.
Data for classification are often incomplete. The multiple-values construction method (MVCM) can be used to include data with missing values in classification. In this study, the MVCM is implemented using fuzzy set theory in the context of classification with discrete data. By using the fuzzy-sets-based MVCM, data with missing values can add value to classification, but can also introduce excessive uncertainty. Furthermore, the computational cost of using incomplete data can be prohibitive if the scale of missing values is large. This paper discusses the association between classification performance and the use of incomplete data. It proposes an algorithm for near-optimal use of incomplete classification data. An experiment with real-world data demonstrates the usefulness of the algorithm.
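One way to picture a multiple-values construction for a single discrete attribute is to expand a record with a missing value into one weighted copy per observed category, the weights standing in for fuzzy memberships. This is a hedged sketch of that idea only; the frequency-based weighting, the function name, and the data are assumptions made here, not the paper's exact formulation.

```python
from collections import Counter

def expand_missing(records, attr_index):
    """Replace each record whose discrete attribute is missing (None) by one
    weighted copy per observed category, weighted by empirical frequency;
    complete records keep weight 1.0."""
    observed = [r[attr_index] for r in records if r[attr_index] is not None]
    freq = Counter(observed)
    total = sum(freq.values())
    out = []
    for r in records:
        if r[attr_index] is None:
            for val, c in freq.items():
                filled = list(r)
                filled[attr_index] = val
                out.append((tuple(filled), c / total))
        else:
            out.append((tuple(r), 1.0))
    return out

records = [("a", 1), ("a", 2), ("b", 3), (None, 4)]
print(expand_missing(records, 0))
```

Note how the expansion multiplies the effective dataset size, which is the computational-cost concern the abstract raises.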
80.
Numerous industrial and research databases include missing values. It is not uncommon to encounter databases with up to half of the entries missing, making it very difficult to mine them using data analysis methods that require complete data. A common way of dealing with this problem is to impute (fill in) the missing values. This paper evaluates how the choice of imputation method affects the performance of classifiers subsequently used with the imputed data. The experiments focus on discrete data. The paper studies the effect of missing data imputation using five single imputation methods (a mean method, a hot-deck method, a Naïve-Bayes method, and the latter two methods with a recently proposed imputation framework) and one multiple imputation method (a polytomous-regression-based method) on classification accuracy for six popular classifiers (RIPPER, C4.5, K-nearest-neighbor, support vector machines with polynomial and RBF kernels, and Naïve-Bayes) on 15 datasets. This experimental study shows that imputation with the tested methods on average improves classification accuracy compared to classification without imputation. Although the results show that there is no universally best imputation method, Naïve-Bayes imputation gives the best results for the RIPPER classifier on datasets with a high proportion (i.e., 40% and 50%) of missing data, polytomous regression imputation is best for the support vector machine with a polynomial kernel, and the imputation framework is superior for the support vector machine with an RBF kernel and for K-nearest-neighbor. The analysis of imputation quality with respect to varying amounts of missing data (i.e., between 5% and 50%) shows that all imputation methods except mean imputation reduce classification error for data with more than 10% of missing values.
Finally, some classifiers such as C4.5 and Naïve-Bayes were found to be missing-data resistant, i.e., they can produce accurate classification in the presence of missing data, while other classifiers such as K-nearest-neighbor, SVMs and RIPPER benefit from imputation.
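Two of the simplest single-imputation baselines evaluated above, mean and hot-deck imputation, can be sketched in a few lines. These are textbook versions with invented names, not the paper's implementations, and they ignore the imputation framework and the multiple-imputation variant.

```python
import random

def mean_impute(col):
    """Replace each missing entry (None) by the column mean."""
    obs = [v for v in col if v is not None]
    m = sum(obs) / len(obs)
    return [m if v is None else v for v in col]

def hot_deck_impute(col, seed=0):
    """Replace each missing entry by a randomly drawn observed value
    from the same column (a minimal hot-deck donor scheme)."""
    rng = random.Random(seed)
    obs = [v for v in col if v is not None]
    return [rng.choice(obs) if v is None else v for v in col]

print(mean_impute([1.0, 3.0, None]))  # -> [1.0, 3.0, 2.0]
print(hot_deck_impute([1.0, 3.0, None]))
```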